Computational study of the step size parameter of the subgradient optimization method
Author
Abstract
The subgradient optimization method is a simple and flexible iterative algorithm for mathematical programming problems. It is much simpler than Newton's method, can be applied to a wider variety of problems, and converges even when the objective function is nondifferentiable. Since an efficient algorithm should not only produce a good solution but also require little computing time, a simpler algorithm of high quality is always preferable. In this study a series of step size parameters in the subgradient update equation is examined. Performance is compared on a general piecewise function and on a specific p-median problem, and we study how the quality of the solution changes under five forms of the step size parameter α.
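The abstract does not spell out the subgradient update or the five forms of α, so the sketch below is only illustrative: it minimizes a randomly generated piecewise-linear function f(x) = max_i (a_i^T x + b_i) with a handful of standard step size rules (constant, diminishing, norm-scaled diminishing, and a Polyak-type step). These rules, the random instance, and the function names are assumptions for illustration and may not coincide with the forms compared in the paper.

```python
# Minimal sketch: the basic subgradient update x_{k+1} = x_k - alpha_k * g_k on a
# nondifferentiable piecewise-linear function f(x) = max_i (a_i^T x + b_i).
# The instance, the function names, and the step size forms below are
# illustrative assumptions; they are not the five alpha forms from the paper.
import numpy as np

def f_and_subgradient(x, A, b):
    """Return f(x) = max_i (a_i^T x + b_i) and one subgradient of f at x."""
    values = A @ x + b
    i = int(np.argmax(values))          # index of an active (maximizing) piece
    return float(values[i]), A[i]       # the gradient a_i of that piece is a subgradient

def subgradient_method(A, b, x0, step_rule, iters=500):
    x = x0.copy()
    best = np.inf                       # subgradient steps are not monotone, so track the best value
    for k in range(1, iters + 1):
        val, g = f_and_subgradient(x, A, b)
        best = min(best, val)
        norm_sq = float(g @ g) + 1e-12
        if step_rule == "constant":
            alpha = 0.01
        elif step_rule == "diminishing":            # alpha_k = c / k
            alpha = 1.0 / k
        elif step_rule == "norm_scaled":            # alpha_k = c / (k * ||g_k||)
            alpha = 1.0 / (k * np.sqrt(norm_sq))
        elif step_rule == "polyak_estimate":        # Polyak step with f* replaced by best - 1/k
            alpha = (val - (best - 1.0 / k)) / norm_sq
        else:
            raise ValueError(f"unknown step rule: {step_rule}")
        x = x - alpha * g               # subgradient update
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 5))        # random piecewise-linear instance: 20 pieces in R^5
    b = rng.normal(size=20)
    x0 = np.zeros(5)
    for rule in ("constant", "diminishing", "norm_scaled", "polyak_estimate"):
        print(f"{rule:16s} best value after 500 iterations: {subgradient_method(A, b, x0, rule):.4f}")
```

The trade-off these rules expose is typical of the method: a constant step stalls at a distance from the optimum, diminishing steps are convergent but slow, and the Polyak-type step adapts to the remaining gap but needs an estimate of the optimal value.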
Similar resources
The Practical Performance of Subgradient Computational Techniques for Mesh Network Utility Optimization
In the networking research literature, the problem of network utility optimization is often converted to its dual problem, which, due to nondifferentiability, is solved with a particular subgradient technique. This technique is not an ascent scheme, so each iteration does not necessarily improve the value of the dual function. This paper examines the performance of this computational technique...
Investigation of stepped planing hull hydrodynamics using computational fluid dynamics and response surface method
The use of a step at the bottom of the hull is one of the effective factors in reducing the resistance and increasing the stability of a planing hull. A step at the bottom of this type of hull creates a separation in the flow, which reduces the wetted surface of the hull, thereby reducing the drag on the body as well as the dynamic trim. In this study, a design space was created...
Using Local Surrogate Information in Lagrangean Relaxation: an Application to Symmetric Traveling Salesman Problems
The Traveling Salesman Problem (TSP) is a classical, intensively studied Combinatorial Optimization problem. Lagrangean relaxation was first applied to the TSP in 1970. The Lagrangean relaxation limit approximates what is known today as the HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound for new heuristics...
Using logical surrogate information in Lagrangean relaxation: An application to symmetric traveling salesman problems
The Traveling Salesman Problem (TSP) is a classical Combinatorial Optimization problem which has been intensively studied. Lagrangean relaxation was first applied to the TSP in 1970. The Lagrangean relaxation limit approximates what is known today as the HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound...
Incremental Stochastic Subgradient Algorithms for Convex Optimization
This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms applied to minimize a sum of functions, where each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm...
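Several of the related entries above apply the same subgradient update to a Lagrangean (dual) function rather than to the primal objective: the multipliers are moved along a subgradient of the dual, and the step is usually scaled by the gap between a known upper bound and the current dual value (the Polyak or Held and Karp form). The sketch below illustrates that variant on a small synthetic 0-1 problem with relaxed covering constraints; the instance, the bound, and the function names are assumptions for illustration, not the network utility, TSP, or p-median models from those papers.

```python
# Minimal sketch of subgradient optimization on a Lagrangean dual: maximize
# L(lam) = min_{x in {0,1}^n} c^T x + lam^T (b - A x) over lam >= 0, using a
# gap-scaled (Polyak / Held-and-Karp style) step size.  The 0-1 covering-style
# instance is synthetic and purely illustrative.
import numpy as np

def lagrangian(lam, A, b, c):
    """Evaluate L(lam) and return its minimizing x in {0,1}^n."""
    reduced = c - A.T @ lam             # reduced cost of each variable
    x = (reduced < 0).astype(float)     # set x_j = 1 exactly when it lowers the Lagrangian
    return float(lam @ b + reduced @ x), x

def dual_subgradient(A, b, c, upper_bound, iters=200, theta=1.0):
    lam = np.zeros(A.shape[0])
    best = -np.inf
    for _ in range(iters):
        value, x = lagrangian(lam, A, b, c)
        best = max(best, value)                      # the dual value is not monotone along the iterates
        g = b - A @ x                                # subgradient of the concave dual L at lam
        norm_sq = float(g @ g)
        if norm_sq == 0.0:
            break                                    # relaxed constraints are satisfied exactly
        alpha = theta * (upper_bound - value) / norm_sq   # gap-scaled step size
        lam = np.maximum(0.0, lam + alpha * g)       # projected ascent step keeps lam >= 0
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.uniform(0.0, 1.0, size=(10, 30))
    b = 0.5 * A.sum(axis=1)             # chosen so that x = all ones satisfies A x >= b
    c = rng.uniform(1.0, 2.0, size=30)
    ub = float(c.sum())                 # cost of that feasible all-ones solution
    print("best Lagrangean lower bound:", dual_subgradient(A, b, c, ub))
```

In Lagrangean relaxation codes the scaling factor theta is typically started near 2 and reduced when the bound stops improving, which is the kind of step size tuning question the abstract above is concerned with.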